1.
Chinese Journal of Radiation Oncology ; (6): 266-271, 2022.
Article in Chinese | WPRIM | ID: wpr-932665

ABSTRACT

Objective: A hybrid attention U-Net (HA-U-Net) neural network was designed based on U-Net for automatic delineation of the craniospinal clinical target volume (CTV), and its segmentation results were compared with those of a U-Net automatic segmentation model. Methods: The data of 110 craniospinal patients were reviewed; 80 cases were selected for the training set, 10 for the validation set and 20 for the test set. HA-U-Net took U-Net as the basic network architecture, with a dual attention module added at the network input and attention gate modules incorporated into the skip connections, to establish the craniospinal automatic delineation model. The evaluation parameters included the Dice similarity coefficient (DSC), Hausdorff distance (HD) and precision. Results: The DSC, HD and precision of the HA-U-Net network were 0.901±0.041, 2.77±0.29 mm and 0.903±0.038, respectively, all better than those of U-Net (all P<0.05). Conclusion: The HA-U-Net convolutional neural network can effectively improve the accuracy of automatic segmentation of the craniospinal CTV, helping physicians to improve both work efficiency and the consistency of CTV delineation.
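The DSC reported throughout these studies is straightforward to compute from a pair of binary masks. A minimal numpy sketch (the arrays and shapes are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Two overlapping toy 2D masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels
print(dice_coefficient(a, b))  # 2*4/(4+6) = 0.8
```

The same function applies unchanged to 3D volumes, since the reductions are over all elements.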

2.
Chinese Journal of Radiation Oncology ; (6): 43-48, 2022.
Article in Chinese | WPRIM | ID: wpr-932626

ABSTRACT

Objective: Given the low contrast between tumors and surrounding tissues in CBCT images, this study proposed an automatic segmentation method for central lung cancer in CBCT images. Methods: A total of 221 patients with central lung cancer were recruited; 176 underwent CT localization and 45 underwent enhanced CT localization. The enhanced CT images were set to the lung window and mediastinal window, and elastic registration was performed with the first CBCT verification images to obtain paired data sets. By loading the paired data sets into a cycleGAN network for style transformation, the CBCT images could be transformed into "enhanced CT" images under the lung window and mediastinal window. Finally, the transformed images were loaded into the UNET-attention network for deep learning of the GTV. The segmentation results were evaluated by the Dice similarity coefficient (DSC), Hausdorff distance (HD) and the area under the receiver operating characteristic curve (AUC). Results: The contrast between tumors and surrounding tissues was significantly improved after style transformation. The DSC, HD and AUC values of the cycleGAN + UNET-attention network were 0.78±0.05, 9.22±3.42 and 0.864, respectively. Conclusion: The cycleGAN + UNET-attention network can effectively segment central lung cancer in CBCT images.
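The Hausdorff distance used here measures the worst-case disagreement between two contours. A minimal numpy sketch of the symmetric HD over point sets (the contours below are toy data, not from the study):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (N, dim)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances between every point of a and every point of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1).max()   # farthest point of a from its nearest in b
    d_ba = d.min(axis=0).max()   # farthest point of b from its nearest in a
    return max(d_ab, d_ba)

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff_distance(a, b))  # → 3.0
```

In practice contour point coordinates are first scaled by the voxel spacing so the result is in millimetres.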

3.
Chinese Journal of Radiological Medicine and Protection ; (12): 697-703, 2022.
Article in Chinese | WPRIM | ID: wpr-956847

ABSTRACT

Objective: To explore the effects of multimodal imaging on the performance of deep learning-based automatic segmentation of glioblastoma radiotherapy targets. Methods: The computed tomography (CT) images, contrast-enhanced T1-weighted (T1C) sequences and T2 fluid-attenuated inversion recovery (T2-FLAIR) sequences of magnetic resonance imaging (MRI) of 30 patients with glioblastoma were collected. The gross tumor volumes (GTV) and the corresponding clinical target volumes CTV1 and CTV2 of the 30 patients were manually delineated according to the criteria of the Radiation Therapy Oncology Group (RTOG). Four datasets were designed: a unimodal CT dataset (only the CT sequences of the 30 cases), a multimodal CT-T1C dataset (the CT and T1C sequences), a multimodal CT-T2-FLAIR dataset (the CT and T2-FLAIR sequences), and a trimodal CT-MRI dataset (the CT, T1C and T2-FLAIR sequences). For each dataset, the data of 25 cases were used to train the modified 3D U-Net model, and the data of the remaining five cases were used for testing. The segmentation performance for the GTV, CTV1 and CTV2 of the testing cases was evaluated using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95) and relative volume error (RVE). Results: The best automatic segmentation results for the GTV were achieved with the CT-MRI dataset. Compared with the CT dataset (DSC: 0.94 vs. 0.79, HD95: 2.09 mm vs. 12.33 mm, RVE: 1.16% vs. 20.14%), there were statistically significant differences in DSC (t=3.78, P<0.05) and HD95 (t=4.07, P<0.05) for the CT-MRI dataset. Highly consistent automatic segmentation results for CTV1 and CTV2 were also achieved with the CT-MRI dataset (DSC: 0.90 vs. 0.91, HD95: 3.78 mm vs. 2.41 mm, RVE: 3.61% vs. 5.35%); however, compared with the CT dataset, the differences in DSC and HD95 for CTV1 and CTV2 were not statistically significant (P>0.05). Additionally, the 3D U-Net model yielded some errors in predicting the upper and lower bounds of the GTV and the organs adjacent to CTV2 (e.g., the brainstem and eyeball). Conclusions: The modified 3D U-Net model based on the multimodal CT-MRI dataset can achieve better segmentation of glioblastoma targets, and its application can potentially benefit clinical practice.
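Multimodal input of the kind described above is usually realized by stacking co-registered, per-modality-normalized volumes as channels of the network input. A hedged sketch, assuming the CT, T1C and T2-FLAIR volumes have already been registered and resampled to the same grid (shapes and variable names are hypothetical):

```python
import numpy as np

# Hypothetical co-registered volumes (depth, height, width), same voxel grid.
ct       = np.random.rand(32, 64, 64).astype(np.float32)
t1c      = np.random.rand(32, 64, 64).astype(np.float32)
t2_flair = np.random.rand(32, 64, 64).astype(np.float32)

def normalise(vol):
    """Z-score normalisation, done per modality before fusion."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)

# Trimodal input: each modality becomes one channel of the network input.
trimodal = np.stack([normalise(v) for v in (ct, t1c, t2_flair)], axis=0)
print(trimodal.shape)  # (3, 32, 64, 64) -> channels-first 3D U-Net input
```

The unimodal CT dataset corresponds to a single-channel input of shape (1, D, H, W); only the first convolution of the network changes between the four dataset variants.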

4.
Journal of Biomedical Engineering ; (6): 722-731, 2021.
Article in Chinese | WPRIM | ID: wpr-888233

ABSTRACT

The background of abdominal computed tomography (CT) images is complex, and kidney tumors vary in shape and size and have unclear edges, so segmentation methods applied to whole CT images are often unable to segment kidney tumors effectively. To solve these problems, this paper proposes a multi-scale network based on cascaded 3D U-Net and DeepLabV3+ for kidney tumor segmentation, which uses an atrous-convolution feature pyramid to adaptively control the receptive field. Through the fusion of high-level and low-level features, the segmented edges of large tumors and the segmentation accuracy of small tumors are effectively improved. A total of 210 CT volumes published by KiTS2019 were used for five-fold cross-validation, and 30 CT volumes collected from Suzhou Science and Technology Town Hospital served as an independent test set for the trained segmentation models. In the five-fold cross-validation experiments, the Dice coefficient, sensitivity and precision were 0.7962±0.2741, 0.8245±0.2763 and 0.8051±0.2840, respectively. On the external test set, the Dice coefficient, sensitivity and precision were 0.8172±0.1100, 0.8296±0.1507 and 0.8318±0.1168, respectively. The results show a great improvement in segmentation accuracy compared with other semantic segmentation methods.
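Five-fold cross-validation as used above partitions the cases into five disjoint folds, training on four and testing on the held-out one in rotation. A minimal sketch of the case-level split (the function name and seed are illustrative, not from the paper):

```python
import numpy as np

def five_fold_splits(n_cases, seed=0):
    """Split case indices into 5 disjoint (train, test) folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_cases)
    folds = np.array_split(idx, 5)
    splits = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        splits.append((train, test))
    return splits

splits = five_fold_splits(210)   # 210 cases, as in the KiTS2019 setup
print([len(test) for _, test in splits])  # [42, 42, 42, 42, 42]
```

Splitting at the case (patient) level rather than the slice level is what keeps slices of one patient from leaking between train and test.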


Subject(s)
Humans , Kidney Neoplasms/diagnostic imaging , Neural Networks, Computer , Specimen Handling , Tomography, X-Ray Computed
5.
Chinese Journal of Medical Instrumentation ; (6): 573-579, 2021.
Article in Chinese | WPRIM | ID: wpr-922062

ABSTRACT

OBJECTIVE: To explore the feasibility of using the bidirectional-local-distance-based medical similarity index (MSI) to evaluate automatic segmentation of medical images. METHODS: Taking the intermediate-risk clinical target volume for nasopharyngeal carcinoma manually segmented by an experienced radiation oncologist as the region of interest, automatic segmentations were obtained with Atlas-based and deep-learning-based methods, and multiple MSI values and the Dice similarity coefficient (DSC) between the manual and automatic segmentations were calculated. The differences between MSI and DSC were then comparatively analyzed. RESULTS: The DSC values for the Atlas-based and deep-learning-based automatic segmentations were 0.73 and 0.84, respectively. Their MSI values varied between 0.29-0.78 and 0.44-0.91 under different inside-outside levels. CONCLUSIONS: It is feasible to use MSI to evaluate automatic segmentation results. By setting the penalty coefficient, MSI can reflect phenomena such as under-delineation and over-delineation, and improve the sensitivity of medical image contour similarity evaluation.


Subject(s)
Feasibility Studies , Radiotherapy Planning, Computer-Assisted
6.
Chinese Journal of Radiation Oncology ; (6): 917-923, 2021.
Article in Chinese | WPRIM | ID: wpr-910492

ABSTRACT

Objective: To evaluate the application of a multi-task learning-based light-weight convolutional neural network (MTLW-CNN) for the automatic segmentation of organs at risk (OARs) in the thorax. Methods: MTLW-CNN consisted of several layers for sharing features and 3 branches for segmenting 3 OARs. 497 cases with thoracic tumors were collected, and the computed tomography (CT) images encompassing the lung, heart and spinal cord were included in this study. The corresponding contours delineated by experienced radiation oncologists served as the ground truth. All cases were randomly divided into a training and validation set (n=300) and a test set (n=197). By applying MTLW-CNN to the test set, the Dice similarity coefficients (DSCs) of the 3 OARs, the training and testing time, and the space complexity (S) were calculated and compared with those of U-Net and DeepLabv3+. To evaluate the effect of multi-task learning on the generalization performance of the model, 3 single-task light-weight CNNs (STLW-CNNs) were built, with structures identical to the corresponding branches of MTLW-CNN. After training the STLW-CNNs on the same data with the same algorithm, their DSCs were statistically compared with those of MTLW-CNN on the test set. Results: For MTLW-CNN, the mean (μ) DSCs of the lung, heart and spinal cord were 0.954, 0.921 and 0.904, respectively. The differences in μ between MTLW-CNN and the other two models (U-Net and DeepLabv3+) were less than 0.020. The training and testing time of MTLW-CNN was 1/3 to 1/30 of that of U-Net and DeepLabv3+, and its S was 1/42 of that of U-Net and 1/1220 of that of DeepLabv3+. The differences in μ and standard deviation (σ) for the lung and heart between MTLW-CNN and STLW-CNN were approximately 0.005 and 0.002. For the spinal cord, the difference in μ was 0.001, but the σ of STLW-CNN was 0.014 higher than that of MTLW-CNN. Conclusions: MTLW-CNN achieves high-precision automatic segmentation of thoracic OARs with less time and memory, improving the application efficiency and generalization performance of the models.

7.
Chinese Journal of Radiation Oncology ; (6): 882-887, 2021.
Article in Chinese | WPRIM | ID: wpr-910486

ABSTRACT

Objective: To evaluate the application value of a deep deconvolutional neural network (DDNN) model for automatic segmentation of the target volume and organs at risk (OARs) in patients with nasopharyngeal carcinoma (NPC). Methods: An end-to-end automatic segmentation model based on the DDNN algorithm was established using the CT images of 800 NPC patients. Ten patients newly diagnosed with NPC were allocated to the test set. Using this DDNN model, 10 junior physicians independently contoured the regions of interest (ROIs) of the 10 patients with both manual contouring (MC) and DDNN deep learning-assisted contouring (DLAC). The accuracy of ROI contouring was evaluated using the DICE coefficient and the mean distance to agreement (MDTA). The coefficient of variation (CV) and standard distance deviation (SDD) were used to measure inter-observer variability and consistency, and the time consumed by each contouring method was also compared. Results: With DLAC, the DICE values of the gross target volume (GTV) and clinical target volume (CTV) were 0.67±0.15 and 0.841±0.032, and their MDTA values were (0.315±0.23) mm and (0.032±0.098) mm, respectively, all significantly better than those in the MC group (all P<0.001). Except for the spinal cord, lens and mandible, DLAC improved the DICE values of the other OARs, among which the mandible had the highest DICE value and the optic chiasm the lowest. Compared with the MC group, the CV and SDD of the GTV, CTV and OARs were significantly reduced (all P<0.001), and the total contouring time was shortened by 63.7% in the DLAC group (P<0.001). Conclusion: Compared with MC, DLAC is a promising method that provides superior accuracy, consistency and efficiency for delineating the GTV, CTV and OARs in NPC patients.

8.
Journal of Biomedical Engineering ; (6): 80-88, 2021.
Article in Chinese | WPRIM | ID: wpr-879252

ABSTRACT

Three-dimensional (3D) segmentation of the liver and its tumors in liver computed tomography (CT) has very important clinical value for assisting doctors in diagnosis and prognosis. This paper proposes a 3D tumor segmentation network, T3scGAN, based on the conditional generative adversarial network (cGAN), and uses a coarse-to-fine 3D automatic segmentation framework to accurately segment the liver and tumor regions. 130 cases from the public 2017 Liver Tumor Segmentation Challenge (LiTS) data set were used to train, validate and test the T3scGAN model. The average Dice coefficients for the 3D liver regions on the validation and test sets were 0.963 and 0.961, respectively, while those for the 3D tumor regions were 0.819 and 0.796, respectively. The experimental results show that the proposed T3scGAN model can effectively segment the 3D liver and its tumor regions, and can therefore better assist doctors in the accurate diagnosis and treatment of liver cancer.


Subject(s)
Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
9.
Journal of Biomedical Engineering ; (6): 136-141, 2020.
Article in Chinese | WPRIM | ID: wpr-788886

ABSTRACT

The segmentation of organs at risk is an important part of radiotherapy. The current manual segmentation method depends on the knowledge and experience of physicians, is very time-consuming, and makes it difficult to ensure accuracy, consistency and repeatability. Therefore, a deep convolutional neural network (DCNN) is proposed for the automatic and accurate segmentation of head and neck organs at risk. The data of 496 patients with nasopharyngeal carcinoma were reviewed; 376 cases were randomly selected for the training set, 60 for the validation set and 60 for the test set. Using a three-dimensional (3D) U-Net DCNN combined with two loss functions, Dice Loss and Generalized Dice Loss, an automatic segmentation model for the head and neck organs at risk was trained. The evaluation parameters were the Dice similarity coefficient and the Jaccard distance. The average Dice similarity coefficient over the 19 organs at risk was 0.91, and the average Jaccard distance was 0.15. The results demonstrate that a 3D U-Net DCNN combined with the Dice Loss function is well suited to the automatic segmentation of head and neck organs at risk.
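The two loss functions named above have simple closed forms. A hedged numpy sketch (training code would use a framework's differentiable tensors; the toy probability vectors below are illustrative):

```python
import numpy as np

def dice_loss(pred, truth, eps=1e-6):
    """Soft Dice loss for one structure: 1 - DSC over probability maps."""
    num = 2.0 * np.sum(pred * truth)
    den = np.sum(pred) + np.sum(truth)
    return 1.0 - (num + eps) / (den + eps)

def generalized_dice_loss(preds, truths, eps=1e-6):
    """Generalized Dice loss: class-wise terms weighted by 1/volume^2,
    which keeps small organs from being swamped by large ones."""
    w = 1.0 / (np.array([t.sum() for t in truths]) ** 2 + eps)
    num = 2.0 * sum(wi * np.sum(p * t) for wi, p, t in zip(w, preds, truths))
    den = sum(wi * (np.sum(p) + np.sum(t)) for wi, p, t in zip(w, preds, truths))
    return 1.0 - num / (den + eps)

p = np.array([1.0, 1.0, 0.0, 0.0])   # predicted probabilities
t = np.array([1.0, 0.0, 0.0, 0.0])   # ground-truth labels
print(round(dice_loss(p, t), 4))     # → 0.3333
```

The inverse-squared-volume weighting is why Generalized Dice Loss helps with the many small structures among head and neck OARs.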

10.
Journal of Biomedical Engineering ; (6): 311-316, 2020.
Article in Chinese | WPRIM | ID: wpr-828165

ABSTRACT

When applying deep learning to the automatic segmentation of organs at risk in medical images, we combined two network models, DenseNet and V-Net, to develop a Dense V-network for the automatic segmentation of three-dimensional computed tomography (CT) images, in order to address the degradation and vanishing-gradient problems that arise when optimizing three-dimensional convolutional neural networks with insufficient training samples. The algorithm was applied to the delineation of pelvic organs at risk, and three representative evaluation parameters were used to quantitatively evaluate the segmentation results. The clinical results showed that the Dice similarity coefficients of the bladder, small intestine, rectum, femoral head and spinal cord were all above 0.87 (average 0.9), and their Jaccard distances were within 0.23 (average 0.18). Except for the small intestine, the Hausdorff distances of the organs were less than 0.9 cm (average 0.62 cm). The Dense V-network has thus been shown to achieve accurate segmentation of pelvic organs at risk.
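The Dice coefficient and Jaccard index reported together here are algebraically linked, so one can be derived from the other. A small sketch of the relation J = D / (2 − D) (the 0.9 input is the average DSC quoted above):

```python
def jaccard_from_dice(dsc):
    """Jaccard index from Dice: J = D / (2 - D); Jaccard distance = 1 - J."""
    jaccard = dsc / (2.0 - dsc)
    return jaccard, 1.0 - jaccard

# Average bladder-level agreement reported above (DSC about 0.9):
j, jd = jaccard_from_dice(0.9)
print(round(j, 4), round(jd, 4))  # → 0.8182 0.1818
```

A Jaccard distance of about 0.18 at DSC 0.9 matches the averages quoted in the abstract, which is a quick consistency check on such metric pairs.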


Subject(s)
Humans , Algorithms , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Neural Networks, Computer , Organs at Risk , Pelvis , Tomography, X-Ray Computed
11.
Journal of Biomedical Engineering ; (6): 670-675, 2020.
Article in Chinese | WPRIM | ID: wpr-828120

ABSTRACT

Previous automatic segmentation networks treat the radiotherapy target as an independent area. In contrast, this paper proposes a stacked neural network that uses the position and shape information of the organs surrounding the target: by superimposing multiple networks and fusing spatial position information, it regularizes the shape and position of the target area and improves segmentation accuracy on medical images. Taking Graves' ophthalmopathy as an example, the left and right radiotherapy target areas were segmented by the stacked neural network built on a fully convolutional neural network. The volume Dice similarity coefficient (DSC) and bidirectional Hausdorff distance (HD) were calculated against the target areas manually delineated by the doctor. Compared with the fully convolutional neural network, the stacked network increased the volume DSC on the left and right sides by 1.7% and 3.4%, respectively, while the bidirectional HD on the two sides decreased by 0.6. The results show that the stacked neural network improves the agreement between the automatic segmentation and the doctor's delineation while reducing segmentation errors in small areas, and can thus effectively improve the accuracy of automatic delineation of radiotherapy targets in Graves' ophthalmopathy.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Neural Networks, Computer , Tomography, X-Ray Computed
12.
Chinese Journal of Radiation Oncology ; (6): 197-202, 2020.
Article in Chinese | WPRIM | ID: wpr-868579

ABSTRACT

Objective: In this study, a deep learning algorithm was integrated with a commercial planning system to establish and validate an automatic segmentation platform for the clinical target volume (CTV) and organs at risk (OARs) in breast cancer patients. Methods: A total of 400 patients with left- or right-sided breast cancer receiving radiotherapy after breast-conserving surgery in the Cancer Hospital, CAMS were enrolled. A deep residual convolutional neural network was used to train CTV and OAR segmentation models, and an end-to-end deep learning-based automatic segmentation platform (DLAS) was established. The accuracy of DLAS delineation was verified on 42 left-sided and 40 right-sided breast cancer patients. The overall Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were calculated, and the relationship between the relative slice position and the per-slice DSC value (DSC_s) was analyzed slice by slice. Results: The mean overall DSC and AHD of the global CTV in left/right breast cancer patients were 0.87/0.88 and 9.38/8.71 mm, respectively. The mean overall DSC and AHD for all OARs in left/right breast cancer patients ranged from 0.86 to 0.97 and from 0.89 to 9.38 mm. In the slice-by-slice analysis, slices reaching DSC_s ≥ 0.9, for which doctors need to make only slight or no modifications, accounted for approximately 44.7% of the CTV slices and 50.9%-89.6% of the OAR slices. For DSC_s < 0.7, the DSC_s values of the CTV and of the regions of interest other than the spinal cord decreased significantly in the boundary regions on both sides (relative slice positions 0-0.2 and 0.8-1.0), and the decrease became more pronounced toward the edges. The spinal cord was delineated over its full length, and no significant decrease of DSC_s was observed in any particular region. Conclusions: The end-to-end automatic segmentation platform based on deep learning can integrate breast cancer segmentation models and achieve excellent automatic segmentation. In the boundary regions at the superior and inferior ends, the consistency of the delineation decreases markedly, which needs further improvement.

13.
Chinese Journal of Radiation Oncology ; (6): 292-296, 2019.
Article in Chinese | WPRIM | ID: wpr-745298

ABSTRACT

Objective: To evaluate the accuracy and validate the feasibility of auto-segmentation based on self-registration and Atlas in adaptive radiotherapy for cervical cancer using MIM Maestro software. Methods: The CT images and delineation results of 60 cervical cancer patients were used to establish the Atlas template database. The planning CT (pCT) and replanning CT (rCT) images of 15 randomly selected patients were contoured for the clinical target volume (CTV) and organs at risk (OARs) by an experienced radiation oncologist. The rCT images of the 15 patients were then auto-contoured using Atlas-based auto-segmentation (Atlas group), and contours were also mapped from the pCT to the rCT images by rigid and deformable image registration (rigid group and deformable group). The time required by each of the three auto-segmentation methods was recorded. The similarity between the auto-contours and the reference contours was assessed using the Dice similarity coefficient (DSC), overlap index (OI), average Hausdorff distance (AHD) and deviation of centroid (DC), and the results were statistically compared among the three groups using one-way analysis of variance. Results: The mean time was 89.2 s, 22.4 s and 42.6 s in the Atlas, rigid and deformable groups, respectively. The DSC, OI and AHD for the CTV and rectum in the rigid and deformable groups differed significantly from those in the Atlas group (all P<0.001), as did the OI for the intestine. The mean DSC for the CTV was 0.89 in the rigid and deformable groups and 0.76 in the Atlas group. The best delineation of the bladder, pelvis and femoral heads was obtained in the deformable group. Conclusions: All three auto-segmentation methods can automatically and rapidly contour the CTV and OARs; the deformable group performs better than the rigid and Atlas groups.

14.
Journal of Biomedical Engineering ; (6): 481-487, 2018.
Article in Chinese | WPRIM | ID: wpr-687605

ABSTRACT

Liver cancer is a common malignant tumor of the digestive system. At present, computed tomography (CT) plays an important role in the diagnosis and treatment of liver cancer, so segmentation of tumor lesions based on CT is critical for clinical diagnosis and treatment. Because manual segmentation is inefficient and subjective, automatic and accurate segmentation based on advanced computational techniques is becoming more and more popular. In this review, we summarize the research progress in automatic segmentation of liver cancer lesions on CT scans. By comparing and analyzing experimental results, this review evaluates the various methods objectively, so that researchers in related fields can better understand the current state of CT-based liver cancer segmentation.

15.
Chinese Journal of Experimental Ophthalmology ; (12): 51-55, 2014.
Article in Chinese | WPRIM | ID: wpr-636283

ABSTRACT

Background: Evaluation of intra-retinal layer thickness plays an important role in the diagnosis and monitoring of various eye diseases, and spectral-domain optical coherence tomography (OCT) is a frequently used tool. Software analysis has been used to measure retinal thickness in previous studies, but studies on the reliability of automatic layering software are lacking. Objective: This study evaluated the repeatability and reproducibility of thickness profile measurements of intra-retinal layers determined by an automated algorithm applied to OCT images from the RTVue100 OCT instrument. Methods: In this prospective cross-sectional study, retinal thickness images within 6 mm around the fovea were obtained from 18 right eyes of 18 normal subjects with the RTVue100 OCT instrument. The retinal images were segmented into the retinal nerve fiber layer (RNFL), ganglion cell layer and inner plexiform layer (GCL+IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), inner segment (IS), outer segment (OS) and retinal pigment epithelium (RPE) layer using the automated algorithm, and Matlab software was used to analyze the measurements. The intraclass correlation coefficient (ICC) and coefficient of reproducibility (COR) were calculated from two examinations by the same examiner to evaluate repeatability, and from the results of two different examiners to assess reproducibility. Written informed consent was obtained from each subject prior to any medical procedure. Results: The entire retinal thickness measured by RTVue OCT was (303.22±14.10) μm in the horizontal meridian and (306.68±13.32) μm in the vertical meridian, with the maximum thickness values in the GCL+IPL and ONL. In both the horizontal and vertical meridians, the ICC and COR were <0.60 in the OPL, IS and OS, while those in the RNFL, GCL+IPL, INL, ONL and RPE layer were >0.70. Conclusions: RTVue OCT with the automated algorithm is a useful and reliable approach to the measurement of intra-retinal layer thickness. Automated segmentation can provide accurate and repeatable thickness profiles of OCT retinal images, and this method may improve the diagnosis and monitoring of retinal diseases.

16.
Journal of Cerebrovascular and Endovascular Neurosurgery ; : 358-363, 2014.
Article in English | WPRIM | ID: wpr-55943

ABSTRACT

OBJECTIVE: Several modalities are available for volumetric measurement of the intracranial aneurysm. We discuss the challenges involved in manual segmentation, and analyze the application of alternative methods using automatic segmentation and geometric formulae in measurement of aneurysm volumes and coil packing density. METHODS: The volumes and morphology of 38 aneurysms treated with endovascular coiling at a single center were measured using three-dimensional rotational angiography (3DRA) reconstruction software using automatic segmentation. Aneurysm volumes were also calculated from their height, width, depth, size of neck, and assumed shape in 3DRA images using simple geometric formulae. The aneurysm volumes were dichotomized as "small" or "large" using the median volume of the studied population (54 mm3) measured by automatic segmentation as the cut-off value for further statistical analysis. RESULTS: A greater proportion of aneurysms were categorized as being "small" when geometric formulae were applied. The median aneurysm volumes obtained were 54.5 mm3 by 3DRA software, and 30.6 mm3 using mathematical equations. An underestimation of aneurysm volume with a resultant overestimation in the calculated coil packing density (p = 0.002) was observed. CONCLUSION: Caution must be exercised in the application of simple geometric formulae in the management of intracranial aneurysms as volumes may potentially be underestimated and packing densities falsely elevated. Future research should focus on validation of automatic segmentation in volumetric measurement and improving its accuracy to enhance its application in clinical practice.
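The geometric alternative discussed above typically models the aneurysm as an ellipsoid and the deployed coil as a cylinder of wire. A hedged sketch with hypothetical dimensions (the formulae are standard geometry; the specific numbers are illustrative, not from the study):

```python
import math

def ellipsoid_volume(height, width, depth):
    """Aneurysm volume assuming an ellipsoid: V = (4/3)*pi*(h/2)*(w/2)*(d/2)."""
    return (4.0 / 3.0) * math.pi * (height / 2) * (width / 2) * (depth / 2)

def coil_volume(wire_diameter, wire_length):
    """Inserted coil volume as a cylinder of deployed wire."""
    return math.pi * (wire_diameter / 2) ** 2 * wire_length

def packing_density(coil_vol, aneurysm_vol):
    return coil_vol / aneurysm_vol

# Hypothetical 5 x 5 x 4 mm aneurysm, 300 mm of 0.25 mm coil wire:
v_aneurysm = ellipsoid_volume(5.0, 5.0, 4.0)   # mm^3
v_coil = coil_volume(0.25, 300.0)              # mm^3
print(round(v_aneurysm, 1))                                 # → 52.4
print(round(100 * packing_density(v_coil, v_aneurysm), 1))  # → 28.1
```

Because the denominator is the aneurysm volume, any underestimation of that volume by the formula directly inflates the calculated packing density, which is the caution the authors raise.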


Subject(s)
Aneurysm , Angiography , Intracranial Aneurysm , Neck
17.
Journal of the Korean Society of Magnetic Resonance in Medicine ; : 243-252, 2012.
Article in Korean | WPRIM | ID: wpr-189237

ABSTRACT

PURPOSE: To evaluate the variation of brain volumetry between different MR scanners and different institutes. MATERIALS AND METHODS: Ten normal subjects were scanned on four different MR scanners, two of which were the same model, to measure inter-scanner variation using the intraclass correlation coefficient (ICC), coefficient of variation (CV) and percent volume difference (PVD), and to calculate the minimal thresholds needed to detect significant volumetric changes in gray matter and subcortical regions. RESULTS: Across all MR scanners, the averaged statistical reliability over all segmented regions was ICC = 0.837 and the volumetric variation was CV = 4.310%. Comparing segmented volumes between pairs of MR scanners with PVD, the volumetric difference was lowest for the identical models (PVD = 3.611%), with a volume threshold of 7.168%; for systemically different MR scanners, the PVD and threshold were 5.785% and 11.340%, respectively. CONCLUSION: The authors conclude that the reliability of brain volumetry is not high. Calibration of the MRI system and image processing are essential to reduce volumetric variability. Additionally, frameworks comprising databases and high-speed image processing algorithms are required for efficient image data management.
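The CV and PVD statistics used above are one-liners once the repeated volume measurements are in hand. A minimal sketch (the volumes are hypothetical examples, not data from the study):

```python
import numpy as np

def coefficient_of_variation(volumes):
    """CV (%) across repeated measurements of the same structure."""
    v = np.asarray(volumes, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

def percent_volume_difference(v1, v2):
    """PVD (%) between two scanners, relative to their mean volume."""
    return 100.0 * abs(v1 - v2) / ((v1 + v2) / 2.0)

# Hypothetical volumes (mm^3) of one structure from four scanners:
vols = [4100.0, 4180.0, 3950.0, 4250.0]
print(round(coefficient_of_variation(vols), 2))          # → 3.13
print(round(percent_volume_difference(4100.0, 4180.0), 2))  # → 1.93
```

The sample standard deviation (ddof=1) is the usual choice for a handful of repeated scans; normalizing PVD by the pair mean keeps it symmetric in the two scanners.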


Subject(s)
Brain , Calibration
18.
Braz. j. med. biol. res ; 43(1): 77-84, Jan. 2010. tab, ilus
Article in English | LILACS | ID: lil-535647

ABSTRACT

The loss of brain volume has been used as a marker of tissue destruction and can serve as an index of the progression of neurodegenerative diseases such as multiple sclerosis. In the present study, we tested a new method of tissue segmentation based on a pixel-intensity threshold, using generalized Tsallis entropy to determine a statistical segmentation parameter for each class of brain tissue. We compared the performance of this method over a range of q parameters and found a different optimal q for white matter, gray matter, and cerebrospinal fluid. Our results support the conclusion that the differences in structural correlations and scale-invariant similarities present in each tissue class can be captured by generalized Tsallis entropy, yielding the intensity limits that separate the tissue classes. To test the method, we applied it to brain magnetic resonance images of 43 patients and 10 healthy controls matched for gender and age. The values found for the entropic index q were 0.2 for cerebrospinal fluid, 0.1 for white matter and 1.5 for gray matter. With this algorithm we could detect an annual brain volume loss of 0.98 percent in the patients, in agreement with literature data. We thus conclude that Tsallis entropy adds advantages to the automatic segmentation of tissue classes, which had not been demonstrated previously.
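The generalized Tsallis entropy underlying this thresholding method has the closed form S_q = (1 − Σ p_iᵠ)/(q − 1), recovering the Shannon entropy as q → 1. A minimal sketch over a toy intensity histogram (the histogram is illustrative, not from the study):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Generalized Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                             # ignore empty histogram bins
    if abs(q - 1.0) < 1e-12:                 # q -> 1 limit is Shannon entropy
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

hist = np.array([0.25, 0.25, 0.25, 0.25])    # toy normalized intensity histogram
print(tsallis_entropy(hist, 1.0))            # Shannon: ln(4) ≈ 1.3863
print(tsallis_entropy(hist, 0.2))            # q < 1, as used for CSF/WM above
```

In the paper's scheme, thresholds between tissue classes are chosen by maximizing such entropies over candidate intensity cut points, with a class-specific q.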


Subject(s)
Adult , Female , Humans , Male , Brain/pathology , Magnetic Resonance Imaging/methods , Multiple Sclerosis/pathology , Organ Size , Algorithms , Case-Control Studies , Entropy
19.
Chinese Journal of Medical Imaging Technology ; (12): 1293-1295, 2009.
Article in Chinese | WPRIM | ID: wpr-473345

ABSTRACT

Objective: To establish a new automatic lung segmentation method that addresses the omission of pleural nodules and pulmonary vessels. Methods: The lung parenchyma was extracted from chest CT images by the inverse operation of 2D region growing together with connected-area classification; the contours were then traced and the contour points located by scan-line searching. Finally, the parameters of the lung contour points were analyzed to locate contours distorted by nodules, and spline curves were used to correct the distorted contours. Results: Experiments on many sets of CT images verified that the proposed technique is effective, and comparison with other contour-correction algorithms showed that the line-searching contour correction is superior. Conclusion: The proposed algorithm includes tumors in the segmentation results, and confirms the completeness, accuracy and real-time performance of this auto-segmentation method.
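The 2D region growing at the heart of this pipeline is a flood fill from a seed pixel that accepts neighbours within an intensity tolerance. A minimal sketch (4-connectivity and a fixed tolerance are simplifying assumptions; the image is a toy example):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """2D region growing: flood-fill from seed, accepting 4-connected
    pixels whose intensity is within `tol` of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(img[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.array([[10, 11, 50],
                [12, 10, 52],
                [51, 53, 55]])
print(region_grow(img, (0, 0), tol=5).astype(int))  # grows over the 10-12 block
```

The paper's "inverse operation" then takes the complement of the grown background to isolate the lung parenchyma before contour tracing.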

20.
Korean Journal of Anatomy ; : 305-312, 2006.
Article in English | WPRIM | ID: wpr-654194

ABSTRACT

Mouse anatomy is fundamental knowledge for researchers who perform biomedical experiments with mice. The purpose of our research is to present serially sectioned images and segmented images of the mouse for producing three-dimensional images of the mouse, which are helpful in learning mouse anatomy. Using a cryomacrotome, one male and one female mouse were serially sectioned transversely at 0.5 mm intervals to make sectioned surfaces, which were digitized to make serially sectioned images. In the serially sectioned images of the female mouse, 14 structures including the skin and bones were semi-automatically segmented in Adobe Photoshop to make segmented images. The serially sectioned images and segmented images were stacked to make sagittal and coronal images, which were used to verify the serially sectioned and segmented images. In this ongoing research, segmented images of the male mouse will be added, and all serially sectioned and segmented images of the mouse will be made available worldwide. These images are expected to be used by many researchers for making three-dimensional images and virtual dissection software of the mouse, which are helpful in comprehending the stereoscopic morphology of the mouse's structures.


Subject(s)
Animals , Female , Humans , Male , Mice , Imaging, Three-Dimensional , Learning , Skin